Search Results: "mira"

5 December 2016

Shirish Agarwal: The Anti-Pollito squad arrest and confession

Disclaimer: This is an attempt at humor and hence entirely fictional in nature. While some incidents depicted are true, the context and the story woven around them are by yours truly. None of the mascots of Debian were hurt during the blog post. I also disavow any responsibility for any hurt (real or imagined) to any past, current and future mascots. The attempt should not be looked upon as demeaning people who are accused of false crimes, tortured and have confessions eked out of them, as this happens quite a lot (in India for sure, but I guess it's the same the world over in varying degrees). The idea is loosely inspired by Chocolate: Deep Dark Secrets (2005). On a more positive note, let's start.

Being a Sunday morning, I woke up late to find incessant knocking on the door; incidentally, mum was not at home. Opening the door, I found two official-looking gentlemen. They asked my name, asked my credentials, tortured me and arrested me for "Group conspiracy of Malicious Mischief in the second and third degrees". The torture was done by means of making me forcefully watch endless reruns of Norbit. While I do love Eddie Murphy, this was one of his movies he could have done without; I guess for many people watching it once was torture enough. I *think* it was nominated for Razzie awards, dunno if it won or not, but this is beside the point. Unlike the 20 years it takes for a typical case to reach its conclusion even in the smallest court in India, due to the endless torture I was made to confess and was given summary judgement. The judgement was/is as follows:

a. Do 100 hours of community service in Debian in 2017. This could be done via blog posts, raising tickets in the Debian BTS, or in whichever way I could be helpful to Debian.
b. Write a confessional, with some photographic evidence, sharing/detailing some of the other members who were part of the conspiracy, in view of the reduced sentence.

So now, I have been forced to write this confession. As you all know, I won a bursary this year for DebConf16. What is not known by most people is that I also got an innocuous-looking e-mail titled "Pollito for DPL". I can't name all the names, as investigation is still ongoing about how far-reaching the conspiracy is. The email was purportedly written by members of the cabal within the cabal which is in Debian. I looked at the email header to see if this was genuine and whether I could trace the origin, but was left none the wiser, as obviously these people are far more technically advanced than to fall for simple tricks like this. Anyway, secretly happy that I had been invited to be part of these elites, I did the visa thing, packed my bags and came to DebConf16. At this juncture, I had no idea whether it was real or I had imagined the whole thing. Then, to my surprise, I saw this evidence of the conspiracy to have Pollito as DPL: the Wifi password. Just like the Illuminati, the conspiracy was in plain sight for all those who knew about it. Most people were thinking of it as a joke, but those like me who had got the e-mails knew better. I knew that the thing was real; now I only needed to bide my time and knew that the opportunity would present itself. And a few days later, sure enough, there was a trip planned for Table Mountain, Cape Town. A few people planned to hike up the mountain, while others chose to take the cable car to the top. First glance of the cable car with Table Mountain as background. Quite a few people came along with us and bought tickets for the trip up the mountain and back.
Ticket for CPT Table Mountain cable car. Incidentally, I was wondering whether the South African Govt. was getting the tax or not. If you look at the ticket, there is just a bar-code. In India as well as the U.S. there is a TIN (Tax Identification Number) displayed on an invoice; TIN displayed on an invoice, from channeltimes.com. A few links to share what it is all about. While these should be on all invoices, you especially need to check when buying high-value items. In India, as shared in the article, the awareness and knowledge leave a bit to be desired. While I'm drifting from the incident, it would be nice if somebody from SA could share how things work there.

Moving on, we boarded the cable car. It was quite a spacious cable car, with I guess around 30-40 people or more who were able to see everything along with the operator. From inside the Table Mountain cable car, 360 degrees. It was a pleasant cacophony of almost two dozen or more nationalities in this 360-degree rotating chamber. I was a little worried though, as it essentially is a bucket and there is always a possibility that a severe wind could damage it. Later somebody did share that some frightful incidents had occurred on the cable car not too long ago. It took about 20-25 odd minutes to get to the top of Table Mountain and we were presented with views such as the one below. View from Table Mountain cable car looking down. The picture I am sharing is actually from when we were going down, as all the pictures taken going up via the cable car were over-exposed. Also, it was more crowded on the way up than on the way down, so handling the mobile camera was not so comfortable.

Once we reached the top, the wind was blowing at incredible speeds. Even with my jacket and everything I was feeling cold. Most of the group, around 10-12 people, looked around to see if we could find a place to have some refreshments and get some energy into the body. So we all ventured to a place and placed our orders: the bleh... Irish coffee at the top of Table Mountain. I was introduced to Irish coffee a few years back and have had some incredible Irish coffees in Pune and elsewhere. I do hope to be able to make Irish coffee at home if and when I have my own house. It is hotter than brandy and is perfect if you are suffering from a cold etc.; if done right, it really needs some skill. This is the only drink which I wanted in SA which I never got right. As South Africa was freezing for me, this would have been the perfect antidote, but the one there, as well as elsewhere, was all bleh. What was interesting, though, was the coffee caller beside it. It looked like a simple circuit mounted on a PCB with lights, vibration and RFID, and it worked exactly like that. I am guessing that as and when the order is ready, there is a signal sent via radio which causes the buzzer to light up and vibrate. Here's the back panel if somebody wants to take inspiration and try it as a fun project. Back panel of the buzz caller.

Once we were somewhat strengthened by the snacks, chai, coffee etc., we made our move to see the mountain. The only way to describe it is that it's similar to Raigad Fort, but the plateau seemed to be bigger. The Wikipedia page of Table Mountain attempts to describe it, but I guess it's more clearly envisioned by one of the pictures shared therein. Table Mountain panoramic image. I have to say Table Mountain is beautiful and haunting, as it has scenes like these: some of the oldest rocks known to wo/man. There is something there which pulls you, which reminds you of a long-lost past.
I could have simply sat there for hours together, but as I was part of the group I had to keep up with them. Not that I minded. The moment I was watching this, I was transported to memories of the Himalayas from about 20 odd years ago. In that previous life, I had the opportunity to be with some of the most beautiful women and also to be in the most happening of places, the Himalayas. Years before, I had shared some of the experiences I had in the Himalayas. I discontinued it as I didn't have a decent camera at that point in time. While I don't wanna digress, I would challenge anybody to experience the Himalayas and then compare. It is just something inexplicable. The beauty and the rawness that the Himalayas show make you feel insignificant and yet part of the whole cosmos. What Paulo Coelho expressed in The Valkyries is something that can be felt in the Himalayas. Leh, Ladakh, Himachal, Garhwal, Kumaon: the list will go on forever as there are so many places, each more beautiful than the other. Most places are also extremely backpacker-friendly, so if you ask around you can get some awesome deals if you want to spend more than a few days in one place.

Moving on: @olasd, or Nicolas Dandrimont, the headmaster of our trip, made small talk with each of us and eked out from all of us that we wanted to have Pollito as our DPL (Debian Project Leader) for 2017. A few pictures are being shared below as supporting evidence as well. The Pollito as DPL cabal in action. Members of the Pollito as DPL cabal. Where am I, or more precisely, how far am I from India. I do not know who further up than Nicolas was in on the coup which would take place. The idea was this: if the current DPL steps down, we would take any and all necessary actions to make Pollito our DPL. Pollito going to SA - photo taken by Jonathan Carter. This has been taken from Pollito's adventure. Being a responsible journalist, I also enquired about Pollito's true history, as the story would not have been complete without one. This is the e-mail I got from Gunnar Wolf, a friend and DD from Mexico:
Turns out, Valessio has just spent a week staying at my house. And
in any case, if somebody in Debian knows about Pollito's
childhood, that is me. Pollito came into our lives when we went to Congreso Internacional de
Software Libre (CISOL) in Zacatecas city. I was strolling around the
very beautiful city with my wife Regina and our friend Alejandro
Miranda, and at a shop at either Ramón López Velarde or Vicente
Guerrero, we found a flock of pollitos. http://www.openstreetmap.org/#map=17/22.77111/-102.57145 Even if this was comparable to a slave market, we bought one from
them, and adopted it as our own. Back then, we were a young couple... Well, we were not that young
anymore. I mean, we didn t have children. Anyway, we took Pollito with
us on several road trips, such as the only time I have crossed an
international border driving: We went to Encuentro Centroamericano de
Software Libre at Guatemala city in 2012 (again with Alejandro), and
you can see several Pollito pics at: http://gwolf.org/album/road-trip-ecsl-2012-guatemala-0 Pollito likes travelling. Of course, when we went to Nicaragua for
DebConf, Pollito tagged along. It was his first flight as a passenger
(we never asked about his previous life in slavery; remember, Pollito
trust no one). Pollito felt very welcome with the DebConf crowd. Of course, as
Pollito is a free spirit, we never even thought about forcing him to
come back with us. Pollito went to Switzerland, and we agreed to meet
again every year or two. It's always nice to have a chat with him. Hugs!
So with that backdrop I would urge fellow Debianites to take up the slogans: LONG LIVE THE DPL! LONG LIVE POLLITO! LONG LIVE POLLITO THE DPL! The first step to making Pollito the DPL is to ensure he has an @debian.org address (pollito@debian.org). We also need him to be made a DD, because only then can he become the DPL. In solidarity and in peace.
Filed under: Miscellaneous Tagged: #caller, #confession, #Debconf16, #debian, #Fiction, #history, #Pollito, #Pollito as DPL, #Table Mountain, Cabal, memories, south africa

29 November 2016

Shirish Agarwal: The Iziko South African Museum

This would be a bit long, on my stay in Cape Town, South Africa after DebConf16. Before I start, let me share that the gallery works; you can see some photos that I have been able to upload to my gallery. It seems we are using Gallery 2, while upstream had made Gallery 3 and then the project sort of died. I actually asked on the softwarerecs StackExchange site if somebody knows of a drop-in replacement for Gallery and was told about Piwigo. I am sure the admin knows about it. There would probably be costs to migrate from Gallery to Piwigo, with the only benefit being that it would perhaps be something more maintainable. The issues I face with the current gallery system are a few things:
a. There is no way to know how much progress your upload has made.
b. After the upload has been submitted, it gives a fake error message saying some error has occurred. This has happened on every occasion/attempt. Now I don't know whether it is because I have slow upload speeds or something else altogether. I had shared the error page last time in the blog post, hence not sharing it again. Although, all the pictures which will be shared in this blog post are from the same gallery. Another thing I would like to share is a small beginner article I wrote about why I like Debian. Another interesting tit-bit of news I came to know a few days back is that both Singapore and Qatar have given 96-hour visa-free stopovers for Indians for select destinations.

Now to start with the story/experience: due to some unknown miracle/angel looking upon me, I got the chance to go to DebConf16, South Africa. I'm sure there were lots of backend discussions, but in the end I was given the opportunity to be part of DebCamp and DebConf. While I hope to recount my DebCamp and DebConf experience in another blog post or two, this one would be exclusively about the post-DebConf experiences I had. As such opportunities to visit another country are rare, I wanted to make the most of it. Before starting from Pune, I had talked with Amey about visas, about DebConf (as he had just been to DebConf15 the year before) and various things related to travel. He was instrumental in me having a bit more knowledge about how to approach things. I was also lucky to have both Graham and Bernelle, who also suggested, advised and made it possible to have a pleasant stay both during DebCamp and DebConf. The only quibble is I didn't know heaters were being made available to us without any cost. Moving on, a day or two before DebConf was about to conclude, I asked for Bernelle's help (even though she was battling a burn-out, I believe) as I was totally clueless about Cape Town. She accepted my request and asked me to look at hostels near Longmarket Street. I had two conditions:
a. It should not be very far from the airport
b. It should be near all or most cultural experiences the city has to offer. We looked at Hostelworld and, from the options listed, Homebasecapetown looked to be a perfect fit. It was one of the cheaper options and they also had breakfast included in the pricing. I booked through Hostelworld for a mixed dorm for 2 days, as I was unsure how it would be (the first-night effect I have shared about previously). When I reached there, I found it to be as good as the pictures shared: the dorm was clean (most important), people were friendly (also important), the toilets and shower were also clean and the water was hot, so all in all it was a win-win situation for me. Posters I saw at Homebasecapetown. While I'm not much of an adrenaline junkie, it was nice to know the activities that could be done/taken. Brochures and condoms just left of the main hall. This was again interesting. While apologies for the poor, shaky quality of the picture, I believe it is easy to figure out. There were brochures of the city attractions as well as condoms that people could discreetly take if need be. I had seen such condoms in a few toilets during and around DebConf, and it felt good that the public were aware of and prioritizing safety for their guests and students, instead of having the fake holier-than-thou attitudes that many places have. For instance, you wouldn't find something like this in the toilets of most colleges in India, or anywhere else for that matter. There are a few vending machines in what are termed as red-light areas, or where prostitution is known/infamous to happen, and even then most of the time they are empty. I have 2-3 social workers as friends and they are a source of news on such things.

While I went to a few places and each had an attraction to it, the one which had my eyes literally out of their sockets was the Iziko South African Museum. I have been lucky to have been to quite a few museums in India; the best-rated science museum in India, in my limited experience, has been the Visvesvaraya Industrial & Technological Museum, Bengaluru, India. A beer from me if a European can get it right. Don't worry if you mispronounce it, I mispronounced it a couple of times till I got it right. Looking up the word Iziko, the meaning of the word seems to be "the hearth", and if you look at the range of collections in the museum, you would think it fits. I was lucky to find a couple of friends, one of whom was living at Homebase, and we decided to go to the museum together. Making friends on the road. So Eduardo, my friend on the left, and his friend, we went to the museum. While viewing the museum, there were no adjectives to describe it other than "Wow" and "Endless". See fossils of fish-whale-shark? OR Giant fish-whale-dolphin-shark some million years ago. And Reminder of JAWS ;) While I have more than a few pictures, the point is easily made. It seems almost inconceivable that creatures of such masses actually were on earth. While I played with the model of the jaws of a whale/shark, in reality if something like that happened, I would have been fighting for my life. The only thing I missed, or which could have been better, is if they had some interactive installations to showcase the now universally accepted Charles Darwin's On the Origin of Species. I had never seen anything like this. Sadly, there was nobody around to help us figure out things, as I had read that most species of fish don't leave a skeleton behind, so how were these models made? It just boggles the mind.
Apart from the science museum, I was also introduced to the bloody history that South Africa had. I saw the 1913 Natives Land Act, which was not honored. I had been under the impression that India had got a raw deal when it was under British rule, but looking at South African history I don't know. While we got our freedom in 1947, they got rid of apartheid only about 20-odd years ago. I talked to a lot of young African males and there was a lot of naked hostility towards the Europeans even today. It was a bit depressing, but I could relate to their point of view, as similar sentiments were echoed by our forefathers. I read the newspapers and it seemed to be a pretty mixed picture. I can't comment, as only South Africans can figure out the way forward. For me, it was enough to know and see that we both had similar political histories as nations. It seemed the racial divide and anger towards Europeans was much more highly pronounced and divisive than the caste divisions here between Indians. I also shared with them my limited knowledge and understanding of Indian history (as history is re-written all the time) and it was clear to them that we had common/similar pasts. What was surprising (actually not) is that many South Africans have no knowledge of Indian history as well; otherwise the political differences that South Africa and India have in the current scenario wouldn't have been. In the end, the trip proved to be fun, stimulating, educative and thought-provoking, as it raised questions about self-identity, national identity, our place in the Universe: the kind of questions which should be asked all the time. Thank you Bremmer and the team for letting me experience Cape Town, South Africa; I would have been poorer if I hadn't had the experience.
Filed under: Miscellaneous Tagged: #Debconf16, #Dinosaur Fishes, #gallery, #Identity, #Iziko South African Museum, #Nation-state Identity, #pwigo

5 November 2016

Russ Allbery: Review: The Just City

Review: The Just City, by Jo Walton
Series: Thessaly #1
Publisher: Tor
Copyright: 2014
Printing: January 2015
ISBN: 0-7653-3266-3
Format: Hardcover
Pages: 368
The premise for The Just City is easy to state: The time-traveling goddess Athene (Athena) decides to organize and aid an attempt to create the society described in Plato's Republic. She chooses Thera (modern Santorini) before the eruption, as a safe place where this experiment wouldn't alter history. The elders of the city are seeded by people throughout history who at some point prayed to Athene, wanting to live in the Republic. The children of the age of ten that Plato suggested starting with are purchased as slaves from various points in history and transported by Athene to the island. Apollo, shaken and confused by Daphne wanting to turn into a tree rather than sleep with him, finds out about this experiment as Athene tries to explain the concept of consent to him. He decides that becoming human for a while might help him learn about volition and equal significance, and that this is the perfect location. He's one of the three viewpoint characters. The others are two women: one (Maia) from Victorian times who prayed to Athene in a moment of longing for the tentative sexual equality of the Republic and was recruited as one of the elders, and another (Simmea) who is bought as a slave and becomes one of the children. I should admit up-front that I've never read Plato's Republic, or indeed much of Plato at all, just small bits for classes. The elders (and of course the gods) all have, and are attempting to stick quite closely to Plato's outline of the ideal city. The children haven't, though, so the book is quite readable for people like me who only remember a few vague aspects of Plato's vision from school. The reader learns the principles alongside Simmea. One of Walton's strengths is taking a science fiction concept, putting real people into it, and letting the quotidian mingle with the fantastic. Simmea is my favorite character here: her journey to the city is deeply traumatic, but the opportunity she gets there is incredible and unforeseen, and she comes to love the city while still understanding, and arguing about, its possible flaws. Maia is nearly as successful; Walton does a good job with committee debates and discussions, avoids coming down too heavily on the drama, and shows a believable picture of people with very different backgrounds and beliefs coming together to flesh out the outlines of something they all agree with, or at least want to try. I found Apollo less engaging as a character, partly because I never quite understood his motives or his weird failure to understand the principles of consent. Walton doesn't portray him as either hopelessly arrogant or hopelessly narcissistic, which would have been easy outs, but in avoiding those two obvious explanations for his failures of empathy, I felt like she left him with an odd and unexplained hole in his personality. He's a weirdly passive half-character for much of the book, although he does develop a bit more towards the end (which was probably the point). Half the fun of this book is working out what the Republic would be like in practice, and what breakdowns and compromises would happen as soon as you put real people in it. Athene obviously has to do a bit of cheating to make a utopia invented as an intellectual exercise work out in practice, plus a bit more for comfort (electricity and indoor plumbing, for instance). The most substantial cheat is robots to replace slaves and do quite a bit that slaves couldn't. 
Birth control (something Plato obviously never would have thought of) is another notable cheat; it's postulated to be an ancient method since lost, but even if that existed, there's no way it would be this reliable. But otherwise, the society mostly works, and Walton shows enough of the arguing and mechanics to make that believable, while still avoiding infodumps and boring descriptions. It's neatly done, although I'm still a bit dubious that the elders from later eras would have put up with the primitive conditions with this little complaint. The novel needs a plot, of course, and that's the other half of the fun. I can't talk about this in any detail without spoiling the book, since the plot only kicks in about halfway through once the setup and character introductions are complete. That makes it hard to explain why I found this a bit less successful, although parts of it are brilliant. What worked for me is the growth of Simmea and her friends as students and philosophers, the arguments and discussions (and their growing enthusiasm for argument and discussion), and the way Greek mythology is woven subtly and undramatically into the story. It really does feel like sitting in on ancient Greek philosophical arguments and experiments, and by that measure Walton has succeeded admirably in her goal. What didn't work for me was the driving conflict of the story, once it's introduced. I can't describe it without spoilers, but it's an old trope in science fiction and one with little scientific basis. It may seem weird to argue that point in a book with time-traveling Greek gods, a literal Lethe, and a Greek idea of souls, but those are mythological background material. The SF trope is something about which I have personal expertise and which simply doesn't work that way, and I had a harder time getting past that than alternate metaphysical properties. It threw me out of the book a bit. I see why Walton chose the conflict she did, but I felt like she could have gotten to the same place in the plot, admittedly with more difficulty, by using some of the more dubious aspects of Plato's long-term plan plus some other obstacles that were already built into the world. This more direct approach added a bit of SF-style analysis of the unknown that seemed weirdly at odds with the rest of the story (even if the delight of one of the characters is endearing). That complaint aside, I really enjoyed reading this book. Apollo didn't entirely work for me, but all of the other characters are excellent, and Walton keeps the story moving at a comfortable clip. Given the amount of description required, particularly for an audience that may not have read the Republic, a lesser writer could have easily slipped into the infodump trap. Walton never does. Fair warning, though: The Just City does end on a cliffhanger, and is in no way a standalone novel. You will probably want to have the sequel on hand. Followed by The Philosopher Kings. Rating: 7 out of 10

8 October 2016

Charles Plessy: I just finished reading the Imperial Radch trilogy.

I liked it a lot. There are already many comments on the Internet (thanks, Russ, for making me discover these novels), so I will not go into details. And it is hard to summarise without spoiling. In brief: the first volume, Ancillary Justice, takes us to various worlds and cultures, and gives us an impression of what it feels like to be a demigod. The main culture does not distinguish between the two sexes, and the grammar of its language does not have genders. This gives an original taste to the story; for instance, when the hero speaks a foreign language, he has difficulty addressing people correctly without risking offending them. Unfortunately the English language itself does not use gender very much, so the literary effect is a bit weakened. Perhaps the French translation (which I have not read) could be more interesting in that respect? The second volume, Ancillary Sword, shows us how one can communicate things in a surveillance society without privacy, by subtle variations on how tea is served. Gallons of tea are drunk in this volume, whose main interest is the relationship between the characters and their conversations. The third volume, Ancillary Mercy, asks the question of what makes us human. Among the most interesting characters, there is a kind of synthetic human who acts as ambassador for an alien race. At first, he indeed behaves in a completely alien way, but in the end he is not very different from a newborn who happened, by some miracle, to know how to speak: in the beginning the world makes no sense, but step by step, and by experimenting, he deduces how it works. This is how this character ends up understanding that what is called "war" is a complex phenomenon, one of whose consequences is a shortage of fish sauce. I was a bit surprised that no book leads us to the heart of the Radch empire, but I have just seen on Wikipedia that one more novel is in preparation... One can speculate that the central Radch resembles a future dystopian West, in which surveillance of everybody is total and constant, but where people think they are happy, and where peace and well-being inside are kept possible thanks to military operations outside, mostly performed by killer robots controlled by artificial intelligences. A not so distant future? It goes without saying that there does not seem to be any Free Software in the Radch empire. That reminds me that I did not contribute much to Debian while I was reading...

11 June 2016

Paul Tagliamonte: It's all relative

As nearly anyone who's worked with me will attest, I've long touted nedbat's talk Pragmatic Unicode, or, How do I stop the pain? as one of the most foundational talks, and required watching for all programmers. The reason is that nedbat hits on something bigger -- something more fundamental than how to handle Unicode -- it's how to handle data which is relative. For those who want the TL;DR, the argument is as follows: Facts of Life:
  1. Computers work with Bytes. Bytes go in, Bytes go out.
  2. The world needs more than 256 symbols.
  3. You need both Bytes and Unicode.
  4. You cannot infer the encoding of bytes.
  5. Declared encodings can be Wrong. (See the short demonstration below.)
Now, to fix it, the following protips:
  1. Unicode sandwich
  2. Know what you have
  3. TEST
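
To make facts 4 and 5 concrete, here is a tiny shell demonstration (my own sketch, not from the talk; it assumes a UTF-8 terminal and the glibc iconv(1), and the word is just an example):

$ # The bytes c3 af are "ï" when read as UTF-8:
$ printf 'na\303\257ve\n'
naïve
$ # Read the very same bytes as Windows-1252 instead, and you get mojibake:
$ printf 'na\303\257ve\n' | iconv -f WINDOWS-1252 -t UTF-8
naÃ¯ve

Nothing in the bytes themselves says which reading was intended; only out-of-band knowledge, or a declared (and hopefully correct) encoding, can tell you.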
Relative Data

I've started to think more about why we do the things we do when we write code, and one thing that continues to be a source of morbid schadenfreude is watching code break by failing to handle Unicode right. It's hard! However, watching what breaks lets you gain a bit of insight into how the author thinks, and what assumptions they make. When you send someone Unicode, there are a lot of assumptions that have to be made. Your computer has to trust what you (yes, you!) entered into your web browser, your web browser has to pass that on over the network (most of the time without encoding information), to a server which reads that bytestream, and makes a wild guess at what it should be. That server might save it to a database, and interpolate it into an HTML template in a different encoding (producing mojibake), resulting in a bad time for everyone involved. Everything's awful, and the fact our computers can continue to display text to us is a goddamn miracle. Never forget that. When it comes down to it, when I see a byte sitting on a page, I don't know (and can't know!) if it's Windows-1252, UTF-8, Latin-1, or EBCDIC. What's a poem to me is terminal garbage to you. Over the years, hacks have evolved. We have magic numbers, and plain ole' hacks to just guess based on the content. Of course, like all good computer programs, this has led to its fair share of hilarious bugs, and there's nothing stopping files from (validly!) being multiple things at the same time. Like many things, it's all in the eye of the beholder.

Timezones

Just like Unicode, this is a word that can put your friendly neighborhood programmer into a series of profanity-laden tirades. Go find one in the wild, and ask them about what they think about timezone handling bugs they've seen. I'll wait. Go ahead. Rants are funny things. They're fun to watch. Hilarious to give. Sometimes just getting it all out can help. They can tell you a lot about the true nature of problems. It's funny to consider the isomorphic nature of Unicode rants and Timezone rants. I don't think this is an accident.

Unicode timezone Sandwich

Ned's Unicode Sandwich applies -- as early as we can, in the lowest level we can (reading from the database, filesystem, wherever!), all datetimes must be timezone qualified with their correct timezone. Always. If you mean UTC, say it's in UTC. Treat any unqualified datetimes as "bytes". They're not to be trusted. Never, never, never trust 'em. Don't process any datetimes until you're sure they're in the right timezone. This lets the delicious inside of your datetime sandwich handle timezones with grace, and finally, as late as you can, turn it back into bytes (if at all!). Treat locations as tzdb entries, and qualify datetime objects into their absolute timezone (EST, EDT, PST, PDT). It's not until you want to show the datetime to the user again that you should consider how to re-encode your datetime to bytes. You should think about what flavor of bytes, what encoding -- what timezone -- should I be encoding into?

TEST

Just like Unicode, testing that your code works with datetimes is important. Every time I think about how to go about doing this, I think about that one time that mjg59 couldn't book a flight starting Tuesday from AKL, landing in HNL on Monday night, because United couldn't book the last leg to SFO. Do you ever assume dates only go forward as time goes on? Remember timezones.
Construct test data, make sure someone in New Zealand's +13:45 can correctly talk with their friends in Baker Island's -12:00, and that the events sort right. Just because it's noon on New Year's Eve in England doesn't mean it's not 1 AM the next year in New Zealand. Places a few miles apart may switch to Daylight Saving Time on different days. Indian Standard Time is not even aligned on the hour to GMT (+05:30)! Test early, and test often. Memorize a few timezones, and challenge your assumptions when writing code that has to do with time. Don't use wall clocks to mean monotonic time. Remember there's a whole world out there, and we only deal with part of it. It's also worth remembering, as Andrew Pendleton pointed out to me, that it's possible that a datetime isn't even unique for a place, since you can never know if 2016-11-06 01:00:00 in America/New_York (in the tzdb) is the first one or the second one. Storing EST or EDT along with your datetime may help, though!

Pitfalls

Improper handling of timezones can lead to some interesting things, and failing to be explicit (or at least, very rigid) in what you expect will lead to an unholy class of bugs we've all come to hate. At best, you have confused users doing math; at worst, someone misses a critical event, or our security code fails. I recently found what I regard to be a pretty bad bug in apt (which David has prepared a fix for, and which is pending upload, yay! Thank you!), which boiled down to documentation and code expecting datetimes in a timezone, but accepting any timezone, and silently treating it as UTC. The solution is to hard-fail, which is an interesting choice to me (as a vocal fan of timezone-aware code), but at least it won't fail by misunderstanding what the server is trying to communicate, and I do understand and empathize with the situation the apt maintainers are in.

Final Thoughts

Overall, my main point is that although most modern developers know how to deal with Unicode pain, I think there is a more general lesson to learn -- namely, you should always know what data you have, and always remember what it is. Understand assumptions as early as you can, and always store them with the data.
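
As a small, concrete illustration of the America/New_York ambiguity mentioned above, GNU date(1) from coreutils can show the two readings side by side. This is just a sketch; the EST/EDT qualifiers and the output format are my assumptions about how you would invoke it:

$ # 2016-11-06 01:00:00 happens twice in America/New_York; qualified, each reading is unambiguous:
$ date -u -d '2016-11-06 01:00:00 EDT' '+%F %T UTC'    # should print 2016-11-06 05:00:00 UTC
$ date -u -d '2016-11-06 01:00:00 EST' '+%F %T UTC'    # should print 2016-11-06 06:00:00 UTC
$ # Drop the qualifier and the same wall-clock string no longer names a single instant,
$ # which is exactly why unqualified datetimes deserve the "bytes" treatment above.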

11 April 2016

Thomas Goirand: Announcing validated Debian packages for Mitaka

Greetings! This is a (4-day-delayed) copy of the announcement I made on the openstack-dev@lists.openstack.org list on the 8th of April 2016. I am overjoyed, thrilled and delighted to announce the release of the Debian packages for Mitaka. All of the DefCore packages were validated successfully this morning through our package-only-based Tempest CI.

Content of this release
This release includes the following 23 services:
aodh 2.0.0
barbican 2.0.0
ceilometer 6.0.0
cinder 8.0.0
congress 3.0.0+dfsg1
designate 2.0.0
glance 12.0.0
gnocchi 2.0.2
heat 6.0.0
horizon 9.0.0
ironic 5.1.0
keystone 9.0.0
magnum 2.0.0
manila 2.0.0
mistral 2.0.0
murano 2.0.0
neutron 8.0.0
nova 13.0.0
trove 5.0.0
sahara 4.0.0
senlin 1.0.0
swift 2.7.0
zaqar 2.0.0

Where to find these packages
1/ Sid
All of Mitaka was uploaded to Debian Sid this week. You can use Debian Sid directly to use them.

2/ Official jessie-backports
As soon as everything migrates to Debian Testing (currently aka Stretch), in 5 days if no RC bug is reported, it will be possible to upload all of Mitaka to the official Debian jessie-backports.

3/ Non-official Jessie and Trusty backports
In the meantime, the packages are available through the Mirantis Jenkins automatic Debian Jessie backport repository. The full sources.list is available here: http://mitaka-jessie.pkgs.mirantis.com/ You can use the Trusty backports as well: http://mitaka-trusty.pkgs.mirantis.com/ To use these repositories, simply add the described sources.list to (for example) /etc/apt/sources.list.d/openstack.list, and run apt-get update. If you want to install the GPG key of the repositories, you can either install the mitaka-jessie-archive-keyring or mitaka-trusty-archive-keyring package (depending on your distribution of choice), or alternatively apt-key add the public key available at /debian/dists/pukey.gpg in these repositories. As a reminder, the URLs above contain the word Mirantis only because the service is sponsored by my employer. These repositories are straight backports from what is available in Debian Sid, without any modification. Remember that the packages listed below are maintained separately in Debian and Ubuntu, and therefore the packages are different in these distributions:
aodh, barbican, ceilometer, cinder, designate, glance, heat, horizon, ironic, keystone, manila, neutron, nova, trove, swift. All other packages (including all OpenStack libraries like Oslo and the python-*clients) are maintained in Debian, with the contribution of Canonical, and then synced to Ubuntu, so they are the exact same packages (or at least, with minimal differences). I hope we can further improve collaboration between Debian and Canonical during the Newton cycle.

Bug reporting
As always, bug reports are welcome, and considered high-value contributions. Please follow the instructions available at https://www.debian.org/Bugs/Reporting to report bugs to the Debian BTS.

Moving forward with higher QA and the Packaging-deb project in Newton
Currently, DefCore packages are tested through a package-only (ie: no puppet, chef, you-name-it system management involved) Tempest CI. Results can be seen at:
https://mitaka-jessie.pkgs.mirantis.com/job/openstack-tempest-ci/ Not all packages are included in this CI yet, though. It is my intention, during the Newton cycle, to also include services like Designate, Trove, Barbican and Congress in this CI. The individual upstream teams for these services are more than welcome to approach us to make this happen quicker. Also, as we're slowly starting to get the Packaging-Deb project going (ie: packaging using the upstream OpenStack Gerrit and gating), it is also in the pipeline to use the above-mentioned Tempest CI system as a gate for the packaging. Hopefully, this will lead us to a full CI/CD working from trunk. We also hope to be able to use these packages to help the Puppet team test packaged OpenStack from trunk.

Greetings
On each release, I ask myself who I should thank. This time, I would like to thank everyone, because this release was overall very nice and worked well. The whole OpenStack community is always very helpful and understands the requirements of downstream distributions. Guys, you're awesome, I love my work, and I love working with you all! Cheers,

30 March 2016

NOKUBI Takatsugu: GAME-ON the historical videogame exhibition

I attended GAME-ON last weekend at Miraikan, the National Museum of Emerging Science and Innovation in Tokyo. There is a lot of historical videogame hardware and software, and a brand-new PSVR exhibition. DEC PDP-1 Display. I could see the real DEC PDP-1 hardware. Unfortunately it is only the hardware; it is not working. PONG is also exhibited.
SPACEWAR description


There is a lot of old hardware: Atari 2600, Commodore 64, Spectrum, and so on. Apple IIe. Atari 2600/VCS. Commodore 64. Spectrum. Almost every exhibit has an English description, and the staff can speak English. If you are interested, go to Miraikan; the exhibition runs until 30th May.

26 March 2016

Gunnar Wolf: Yes! I can confirm that...

I am very very (very very very!) happy to confirm that... This year, and after many years of not being able to, I will cross the Atlantic. To do this, I will take my favorite excuse: attending DebConf! So, yes, this image I am pasting here is as far as you can imagine from official promotional material. But, having bought my plane tickets, I have to start bragging about it ;-) In case it is of use to others (at least, to people from my general geographic roundabouts), I searched for plane tickets straight from Mexico. I was accepting my lack of luck, facing an over-36-hour trip(!!) and very high prices. Most routes were Mexico - central Europe - Arab Emirates - South Africa... Great for collecting frequent-flier miles, but terrible for anything else. Of course, requesting a more logical route (say, via São Paulo in Brazil) resulted in a price hike to over US$3500. Not good. I found out that Mexico-Argentina tickets for that season were quite agreeable at US$800, so I booked our family vacation to visit the relatives, and will fly from there at US$1400. So, yes, in a 48-hr timespan I will do MEX-GRU-ROS, then (by land) Rosario to Buenos Aires, then AEP-GRU-JNB-CPT. But while I am at DebConf, Regina and the kids will be at home with the grandparents and family and friends. In the end, win-win with just an extra bit of jetlag for me ;-) I *really* expect flights to be saner for USians, Europeans, and those coming from farther away. But we have grown to have many Latin Americans, and I hope we can all meet in CPT for the most intense weeks of the year! See you all in South Africa!

6 March 2016

Thorsten Glaser: mksh R52c, paxmirabilis 20160306 released; PA4 paper size PDF manpages

The MirBSD Korn Shell R52c was published today as a bugfix-accumulating release of low to medium importance. Thanks to everyone who helped squashing all those bugs; this includes our bug reporters who always include reproducer testcases; you're wonderful! MirCPIO was also resynchronised from OpenBSD, to address the CVE-2015-1193 and CVE-2015-1194 test cases, after a downstream (wow, there are so many?) reminded us of it; thanks!
This is mostly to prevent extracting ../foo, either directly or via a symlink(7), from actually ending up being placed in the parent directory. As such the severity is medium-high. And it has a page now, initially just a landing page / stub; it will be fleshed out later. Uploads for both should make their way into Debian very soon (these are the packages mksh and pax). Uploading backports for mksh (jessie and wheezy-sloppy) has been requested by several users, but none of the four(?) DDs asked about sponsoring them even answered at all, and the regular (current) sponsors don't have experience with bpo, so SOL. I've also tweaked a bug in sed(1), in MirBSD. Unfortunately, this means it now comes with the GNUism -i too: don't use it, use ed(1) (much nicer anyway) or perlrun(1) -p/-n. Finally, our PDF manpages now use the PA4 paper size instead of DIN ISO A4, meaning they can be printed without cropping or scaling on both A4 and US-American letter paper. And, from the last announcement: we now use Gentium and Inconsolata as body text and monospace fonts, respectively. (Apropos, the website ought to be more legible due to text justification and better line spacing now.) I managed to hack this up in GNU groff and Ghostscript, thankfully. (LaTeX too.) Currently there are PDF manpages for joe (jupp), mksh, and cpio/pax/tar. And we had Grünkohl today! Also, there is a new console-setup package in the WTF APT repository, since upstream managed to do actual work on it (and even fixed some bugs). Read its feed if interested, as its news will not usually be repeated here. (That means: subscribe, as there won't be many future reminders in this place.) The netboot.me service appears to be gone. I'll not remove our images, but if someone knows what became of it, drop us a message (IRC or mailing list will work just fine). PS: This was originally written on 20160304 but opax refused to be merged in time. Happy Birthday, gecko2! In the meantime, the Street Food festival weekend provided wonderful food at BaseCamp, and a headache prevented this from being finished on the fifth. Update 06.03.2016: The pax changes were too intrusive, so I decided to instead only backport the fixes OpenBSD did (both those they mentioned and those silently included), well, the applicable parts of them, anyway. There will be a completely rebased MirCPIO release later, after all changes are merged and, more importantly, tested. Another release, although not set for the immediate future, should bring a more sensible (and mksh-like) buildsystem for improved portability (and thus some more changes we had to exclude at first). I've also cloned the halfwidth part of the FixedMisc [MirOS] font as FixedMiscHW for use with Qt5 applications; it is in xfonts-base in the WTF APT repo. (Debian #809979) tl;dr: mksh R52c (bugfix-only, low-medium); mircpio 20160306 (security backport; high) with a future complete rebase (medium) upstream and in Debian. No mksh backports due to lacking a bpo-capable sponsor. New console-setup in WTF APT repo, and mksh there as usual. xfonts-base too. netboot.me gone?
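
Since ed(1) is recommended above as the saner replacement for sed -i, here is what an in-place substitution looks like with it; a minimal sketch, with file.txt and the foo/bar pattern as mere placeholders:

$ printf '%s\n' ',s/foo/bar/g' w q | ed -s file.txt

ed -s suppresses the byte counts; ,s/foo/bar/g applies the substitution to every line, w writes the file back, and q quits.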

1 March 2016

Russ Allbery: Review: The Lost Steersman

Review: The Lost Steersman, by Rosemary Kirstein
Series: Steerswomen #3
Publisher: Rosemary Kirstein
Copyright: 2003, 2014
Printing: 2014
ISBN: 0-9913546-2-1
Format: Kindle
Pages: 432
This is the third book in the Steerswomen series and a direct follow-up to the events of The Outskirter's Secret. It does, marvel of marvels, feature an in-character summary of the events of the series to date! I do love when authors do this; it helps immensely if you come back to a series after a bit of a break between books. But this whole series is so good, and the emotional tone and development of Rowan as a character is so strong, that I recommend against starting here. After the events of the last book, Rowan has returned to the Inner Lands. She's sent her report back to the Archives, but stopped at the Annex in Alemeth. This is an auxiliary library that should have copies of the journals and other research that Rowan wants to search, and stopping there saves substantial travel time. However, she finds the steerswoman who was custodian of the Annex is deceased and the Annex is, from Rowan's perspective, a mess. Nothing is organized, the books aren't properly cared-for, and Mira's interactions with the townsfolk were far different than Rowan's natural attitude. The start of this book was a surprising shift. After the large-scale revelations at the end of The Outskirter's Secret, and the sense of escalating danger, Rowan's return to small-town life in the Inner Lands comes as a shock. That's true for both the reader and for Rowan, and the parallels make it a remarkably effective bit of writing. At the beginning of the story, the reader is already familiar with Rowan (at least if you've read the previous books) and how she thinks of being a steerswoman. Rowan is very much on edge and in a hurry given what's going on in the broader world. But the town is used to Mira: a gregarious socializer who cared far more about town gossip and her role as coordinator of it than she cared about most of her steerswoman duties. (At least as seen from Rowan's perspective. By the end of the book, we have a few hints that something else might be going on, but the damage to the books at least feels unforgivable.) Rowan is resented and even disliked at first, particularly by Steffie and Gwen who did most of the chores at the Annex and were closest to Mira. One of the reasons why I love this series so much is that Kirstein has a gift for characterization. Rowan (and Bel, who largely doesn't appear in this book) are brilliant characters, but it's not just them. At the start of this book, the reader tends to share Rowan's opinion of the town: a sort of half-bemused, half-exasperated indifference. Even as the characters start to grow on one, it seems like a backwater and a diversion from the larger story. But it becomes clear that Rowan is very on-edge from her experience in the Outskirts, that she's underestimated the relevance of Mira to at least the town's happiness, and she's greatly underestimated the ability of the townsfolk to help her. Steffie, in particular, is a wonderful character; by the end of the book, he had become one of my favorite people in the series so far. He doesn't think he's particularly smart, and his life before Rowan is very simple, but there are depths to him that no one, including him, expected. There is a plot here, apart from small town politics and Rowan's slow relaxation. (Although those were so compelling that I'm not sure I would have minded if that were the entire book.) The lost steersman of the title is an old student friend of Rowan that she unexpectedly meets in town, a former steersman who quit the order and refused to explain why. 
Rowan, of course, cannot resist trying to fix this situation. The second plot driver is a dangerous invasion of Outskirts monsters into the town. Those who have read the previous books will have some immediate guesses as to why this might be, and Rowan does as well. But there's more going on than it might first appear. This book is not entirely a diversion. It returns to the main plot of the series by the end of the book, and we learn much more about Rowan's world. But, somewhat surprisingly, that was my least favorite part of the book. It has some nice bits of exploration and puzzle-solving, and Rowan is always a delight to spend time with. But the last section of the book is similar to many other genre books I've read before: well-written, to be sure, but not as unique. It also features a rather long section following a character who is severely physically ill, which is something I always find very hard to read (a personal quirk). But there's a lot of meat here for the broader plot, and I have no idea what will happen in the next book. The part of The Lost Steersman that I'm going to remember, though, is the town bits, up through the arrival of Zenna (another delightful character who adds even more variety to Kirstein's presentation of steerswomen). Kirstein is remarkably good at mixing small-town characters with the scientific investigation of the steerswomen and letting them bounce off of each other to reveal more about the character of both. If it weren't for the end of the book, which bothered me for partly idiosyncratic reasons, I think this would have been my favorite book of the series. Followed by The Language of Power, and be warned that this book ends on something close to a cliffhanger. Rating: 8 out of 10

14 February 2016

Lunar: Reproducible builds: week 42 in Stretch cycle

What happened in the reproducible builds effort between February 7th and February 13th 2016:

Toolchain fixes
  • James McCoy uploaded devscripts/2.16.1 which makes dcmd support .buildinfo files. Original patch by josch.
  • Lisandro Damián Nicanor Pérez Meyer uploaded qt4-x11/4:4.8.7+dfsg-6, which makes files created by qch reproducible by using a fixed date instead of the current time. Original patch by Dhole.
Norbert Preining rejected the patch submitted by Reiner Herrmann to make the CreationDate not appear in comments of DVI / PS files produced by TeX. He also mentioned that some timestamps can be replaced by using the -output-comment option and that the next version of pdftex will have patches inspired by reproducible builds to mitigate the effects (see the SOURCE_DATE_EPOCH patches).

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: abntex, apt-dpkg-ref, arduino, c++-annotations, cfi, chaksem, clif, cppreference-doc, dejagnu, derivations, ecasound, fdutils, gnash, gnu-standards, gnuift, gsequencer, gss, gstreamer0.10, gstreamer1.0, harden-doc, haskell98-report, iproute2, java-policy, libbluray, libmodbus, lizardfs, mclibs, moon-buggy, nurpawiki, php-sasl, shishi, stealth, xmltex, xsom.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:
  • #813944 on cvm by Reiner Herrmann: remove gzip headers, fix permissions of some directories and the order of the md5sums.
  • #814019 on latexdiff by Reiner Herrmann: remove the current build date from documentation.
  • #814214 on rocksdb by Chris Lamb: add support for SOURCE_DATE_EPOCH.

reproducible.debian.net

A new armhf build node has been added (thanks to Vagrant Cascadian) and integrated into the Jenkins setup for 4 new armhf builder jobs. (h01ger) All packages for Debian testing (Stretch) have been tested on armhf in just 42 days. It took 114 days to get to the same point for unstable back when the armhf test infrastructure was much smaller. Package sets have been enabled for testing on armhf. (h01ger) Packages producing architecture-independent (Arch:all) binary packages together with architecture-dependent packages targeted for specific architectures will now only be tested on matching architectures. (Steven Chamberlain, h01ger) As the Jenkins setup is now made of 252 different jobs, the overview has been split into 11 different smaller views. (h01ger)

Package reviews

222 reviews have been removed, 110 added and 50 updated in the previous week. 35 FTBFS reports were made by Chris Lamb, Danny Edel, and Niko Tyni.

Misc.

The recordings of Ludovic Courtès' talk at FOSDEM 16 about reproducible builds and GNU Guix are now available. One can also have a look at the slides from Fabian Keil's talk about ElectroBSD and Baptiste Daroussin's talk about FreeBSD packages.

4 February 2016

Petter Reinholdtsen: Using appstream in Debian to locate packages with firmware and mime type support

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :) Here is a small recipe to find the package with a given firmware file; in this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do it like this:
% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%
See the appstream wiki page to learn how to embed the package metadata in a way appstream can use. This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:
% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%
I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.
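For what it is worth, the two steps can also be combined into a single command; this is just a sketch, with drawing.svg standing in for whatever file you received:
% appstreamcli what-provides mimetype "$(file --brief --mime-type drawing.svg)" | \
  awk '/Package:/ {print $2}'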

19 January 2016

Joey Hess: git-annex v6

Version 6 of git-annex, released last week, adds a major new feature: support for unlocked large files that can be edited as usual and committed using regular git commands. For example:
git init
git annex init --version=6
mv ~/foo.iso .
git add foo.iso
git commit -m "added hundreds of megabytes to git annex (not git)"
git remote add origin ssh://server/dir
git annex sync origin --content # uploads foo.iso
Compare that with how git-annex has worked from the beginning, where git annex add is used to add a file, and then the file is locked, preventing further modifications of it. That is still a very useful way to use git-annex for many kinds of files, and is still supported of course. Indeed, you can easily switch files back and forth between being locked and unlocked. This new unlocked file mode uses git's smudge/clean filters, and I was busy developing it all through December. It started out playing catch-up with git-lfs somewhat, but has significantly surpassed it now in several ways. So, if you had tried git-annex before, but found it didn't meet your needs, you may want to give it another look now.
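For example, switching a file between the two modes could look like this (a sketch based on the description above; foo.iso is the file from the earlier example):
git annex unlock foo.iso       # turn it into a regular, editable file
# ...edit foo.iso as usual...
git add foo.iso                # the clean filter keeps the content in the annex
git commit -m "updated foo.iso"
git annex lock foo.iso         # optionally go back to the classic locked form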
Now a few thoughts on git-annex vs git-lfs, and the different tradeoffs made by them. After trying it out, my feeling is that git-lfs brings an admirable simplicity to using git with large files. File contents are automatically uploaded to the server when a git branch is pushed, and downloaded when a branch is merged, and after setting it up, the user may not need to change their git workflow at all to use git-lfs. But there are some serious costs to that simplicity. git-lfs is a centralized system. This is especially problematic when dealing with large files. Being a decentralized system, git-annex has a lot more flexibility, like transferring large file contents peer-to-peer over a LAN, and being able to choose where large quantities of data are stored (maybe in S3, maybe on a local archive disk, etc). The price git-annex pays for this flexibility is you have to configure it, and run some additional commands. And, it has to keep track of what content is located where, since it can't assume the answer is "in the central server". The simplicity of git-lfs also means that the user doesn't have much control over what files are present in their checkout of a repository. git-lfs downloads all the files in the work tree. It doesn't have facilities for dropping the content of some files to free up space, or for configuring a repository to only want to get a subset of files in the first place. On the other hand, git-annex has excellent support for all those things, and this comes largely for free from its decentralized design. If git has shown us anything, it's perhaps that a little added complexity to support a fully distributed system won't prevent people using it. Even if many of them end up using it in a mostly centralized way. And that being decentralized can have benefits beyond the obvious ones.
Oh yeah, one other advantage of git-annex over git-lfs: it can use half as much disk space! A clone of a git-lfs repository contains one copy of each file in the work tree. Since the user can edit that file at any time, or checking out a different branch can delete it, git-lfs also stashes a copy inside .git/lfs/objects/. One of the main reasons git-annex used locked files, from the very beginning, was to avoid that second copy. A second local copy of a large file can be too expensive to put up with. When I added unlocked files in git-annex v6, I found it needed a second copy of them, same as git-lfs does. That's the default behavior. But, I decided to complicate git-annex with a config setting:
git config annex.thin true
git annex fix
Run those two commands, and now only one copy is needed for unlocked files! How's it work? Well, it comes down to hard links. But there is a tradeoff here, which is why this is not the default: when you edit a file, no local backup of its old content is preserved. So you have to make sure to let git-annex upload files to another repository before editing them, or the old version could get lost. So it's a tradeoff, and maybe it could be improved. (Only thin out a file after a copy has been uploaded?) This adds a small amount of complexity to git-annex, but I feel it's well worth it to let unlocked files use half the disk space. If the git-lfs developers are reading this, that would probably be my first suggestion for a feature to consider adding to git-lfs. I hope for more opportunities to catch up to git-lfs in turn.
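A rough way to convince yourself the hard-linking is in effect, as a sketch assuming GNU coreutils and findutils and a file named foo.iso (the annexed object path shown is abbreviated):
$ stat -c %h foo.iso                                  # link count of 2: work tree and annex share storage
2
$ find .git/annex/objects -type f -samefile foo.iso   # locate the annexed object sharing the inode
.git/annex/objects/...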

7 January 2016

Thorsten Glaser: git find published; test, review, fix it please

I just published the first version of git find on gh/mirabilos/git-find for easy collaboration. The repository deliberately only contains the script and the manual page so it can easily be merged into git.git with complete history later, should they accept it. git find is MirOS licenced. It does require a recent mksh (Update: I did start it in POSIX sh first, but it eventually turned out to require arrays, and I don't know perl(1) and am not going to rewrite it in C) and some common utility extensions to deal with NUL-separated lines (sort -z, grep -z, git ls-tree -z); also, support for '\0' in tr(1) and a comm(1) that does not choke on embedded NULs in lines.

To install or uninstall it, run

	$ git clone git@github.com:mirabilos/git-find.git
	$ cd git-find
	$ sudo ln -sf $PWD/git-find /usr/lib/git-core/
	$ sudo cp git-find.1 /usr/local/share/man/man1/
	… hack …
	$ sudo rm /usr/lib/git-core/git-find \
	    /usr/local/share/man/man1/git-find.1

then you can call it as "git find" and look at the documentation with "git help find", as is customary. The idea behind this utility is to have a tool like git grep that acts on the list of files known to git (and not e.g. ignored files) to quickly search for, say, all PNG files in the repository (but not the generated ones). git find acts on the index for the HEAD, i.e. whatever commit is currently checked out (unlike git grep, which also knows about git add'ed files; fix welcome) and then offers a filter syntax similar to find(1) to follow up: parentheses, ! for negation, and -a and -o for boolean logic are supported, as well as -name, -regex and -wholename and their case-insensitive variants, although regex uses grep(1) without (or, if the global option -E is given, with) -E, and the pattern matches use mksh(1)'s, which ignores the locale and doesn't do [[:alpha:]] character classes yet. On the plus side, the output is guaranteed to be sorted; on the minus side, it is rather wastefully using temporary files (under $TMPDIR of course, so use of tmpfs is recommended). -print0 is the only output option (-print being the default). Another mode forwards the file list to the system find; since it doesn't support DOS-style response files, this only works if the number of files is smaller than the operating system's limit; this mode supports the full range (except -maxdepth) of the system find(1) filters, e.g. -mmin -1 and -ls, but it incurs a filesystem access penalty for the entire tree and doesn't sort the output, but can do -ls or even -exec. The idea here is that it can collaboratively be improved, reviewed, fixed, etc. and then, should they agree, with the entire history, subtree-merged into git.git and shipped to the world. Part of the development was sponsored by tarent solutions GmbH, the rest and the entire manual page were done in my vacation.
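A purely hypothetical usage sketch, based only on the filter syntax described above; the exact invocation and defaults may differ, so check git help find before relying on it:

	$ git find -name '*.png' -a ! -wholename 'generated/*'
	$ git find -iname '*.jpg' -print0 | xargs -0r ls -l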

1 January 2016

Bdale Garbee: Term Limited

I woke up this morning and realized that for the first time since 17 April 2001, I am no longer a member of the Debian Technical Committee. My departure from the committee is a consequence of the Debian General Resolution "limiting the term of the technical committee members" that was passed amending the Debian Constitution nearly a year ago. As the two longest-serving members, both over the term limit, Steve Langasek and I completed our service yesterday. In early March 2015, I stepped down from the role of chairman after serving in that role for the better part of a decade, to help ensure a smooth transition. Don Armstrong is now serving admirably in that role, I have the utmost respect for the remaining members of the TC, and the process of nominating replacements for the two now-vacant seats is already well underway. So, for the Debian project as a whole, today is really a non-event... which is exactly as it should be! Debian has been a part of my life since 1994, and I sincerely hope to be able to remain involved for many years to come!

31 December 2015

Laura Arjona Reina: Thanks Ian, thanks Debian

I didn't know Ian Murdock, but the news about his passing left me with a very strange and sad feeling, because he started the project that creates the tool that I use every day in my work, and every day in my communication with my family and friends, and every day for anything computer related... It's like if somebody puts a treasure in your hands and you got distracted looking at it, and when you head up to look at the person and say "Thank you", he's gone... And, in the last years, Debian for me is not just "my favorite tool": I've been slowly getting involved in the community, known some people here and there, been able to put in some work to try to improve some small parts, been able to work with other people as a team, and I've been touched many times admiring how the Debianers work, how they talk and write, how they behave to each other and to the ones that reach the community for the first time, and to the world, since most of the communication and work is public... I've felt myself helped, welcomed, encouraged, empowered. Not only in my computer related skills or the improved capabilities of my humble hardware. I've felt myself helped, welcomed, encouraged and empowered in important areas of my life (understanding other points of view, caring about the ones that don't speak aloud, enjoying diversity and becoming flexible to make it flourish, making friends...). And I like to think that I try to emulate that and help, welcome, encourage, empower others too... I'm learning. Thanks Ian, for this alive and growing treasure that is Debian (the OS, the community), and thanks Debian, for the past, present and future miracles.

15 December 2015

Thomas Goirand: OpenStack: Mitaka beta 1 packages available, Liberty uploaded to Jessie Backports

OpenStack Mitaka beta 1 Debian packages available

I didn't find the time to announce it until today, though I finished packaging Mitaka Beta 1 last Friday. It is available, as usual, on the Jenkins server Debian repository:

deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports main
deb http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main
deb-src http://mitaka-jessie.pkgs.mirantis.com/debian jessie-mitaka-backports-nochange main

Not all of the updated packages available above have been uploaded to Debian Experimental; mostly, those needing to pass the FTP masters' NEW queue did. I will upload the rest as I find enough time to do so, which unfortunately may not happen before Mitaka b2 (which will be in the middle of January).

OpenStack Liberty uploaded to jessie-backports

Also, as python-repoze.who 2.x finally could migrate to Debian testing (after the dependencies filed for removal got removed by the FTP masters), python-pysaml2 3.0, and then Keystone, also did. So this week-end, all of OpenStack Liberty reached testing. So today, I could finally upload all of OpenStack Liberty to the official jessie-backports repository. This is 165 packages that I uploaded, out of which 135 are going through the backports NEW queue. I'm sorry to give that much work to the FTP masters, but most OpenStack users do want to use the latest release of OpenStack on top of the latest stable distributions. So this upload really is what OpenStack Debian users will prefer (until we have PPA^Wbikesheds for Debian).
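As a quick sketch of what consuming these backports looks like on a Jessie system once they clear the NEW queue (the mirror URL and the package name below are only examples, not a specific recommendation):

$ echo 'deb http://httpredir.debian.org/debian jessie-backports main' | \
    sudo tee /etc/apt/sources.list.d/backports.list
$ sudo apt-get update
$ sudo apt-get install -t jessie-backports python-keystoneclient   # example package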

28 October 2015

Martin-Éric Racine: xf86-video-geode: Last call, dernier s vice

I guess that the time has finally come to admit that, as far as upstream development is concerned, the Geode X.Org driver is reaching retirement age:
While there have indeed been recent contributions by a number of developers to keep it compilable against recent X releases, the Geode driver has accumulated too much cruft from the Cyrix and NSC days, and it hasn't seen any active contribution from AMD in a long time. Besides, nowadays, Xserver pretty much assumes that it runs on an X driver that leverages its matching kernel driver and thus won't require root privileges to launch. This isn't the case with the Geode driver, since it directly probes FBDEV and MSR, both of which reside in /dev and require root privileges to access.
On Debian, as a stopgap measure, the package now Recommends a legacy wrapper that enforces operation as root. Meanwhile, other distributions are mercilessly dropping all X drivers that don't leverage KMS. Basically, unless a miracle happens really quickly, Geode will soon become unusable on X.
Back when AMD was still involved, a consensus had been reached that, since the Geode series doesn't offer any sort of advanced graphics capabilities, the most sensible option would indeed be to make a KMS driver and let Xserver use its generic modesetting driver on top of that, then drop the Geode X driver entirely. Amazingly enough, someone did start working on a KMS driver for Geode LX, but it never made it as far as the Linux kernel tree (additionally, Gitorious seems to be down, but I have a copy of the driver's Git tree on hand, if anyone is interested). While I'll still be accepting and merging patches to the Geode X driver, our best long-term option would be to finalize the KMS driver and have it merged into Linux ASAP.

13 October 2015

Julien Danjou: Benchmarking Gnocchi for fun & profit

We have gotten pretty good feedback on Gnocchi so far, even if we only had a little. Recently, in order to have a better feeling of where we were at, we wanted to know how fast (or slow) Gnocchi was. The early benchmarks that some of the Mirantis engineers ran last year showed pretty good signs. But a year later, it was time to get real numbers and have a good understanding of Gnocchi's capacity.

Benchmark tools

The first thing I realized when starting that process is that we were lacking tools to run benchmarks. Therefore I started to write some benchmark tools in python-gnocchiclient, which provides a command line tool to interrogate Gnocchi. I added a few basic commands to measure metric performance, such as:
$ gnocchi benchmark metric create -w 48 -n 10000 -a low
+----------------------+------------------+
| Field                | Value            |
+----------------------+------------------+
| client workers       | 48               |
| create executed      | 10000            |
| create failures      | 0                |
| create failures rate | 0.00 %           |
| create runtime       | 8.80 seconds     |
| create speed         | 1136.96 create/s |
| delete executed      | 10000            |
| delete failures      | 0                |
| delete failures rate | 0.00 %           |
| delete runtime       | 39.56 seconds    |
| delete speed         | 252.75 delete/s  |
+----------------------+------------------+

The command line tool supports the --verbose switch to give a detailed progress report on the benchmark's progression. So far it supports metric operations only, but that's the most interesting part of Gnocchi.

Spinning up some hardware

I got a couple of bare metal servers to test Gnocchi on. I dedicated the first one to Gnocchi, and used the second one as the benchmark client, plugged into the same network. Each server is made of 2 Intel Xeon E5-2609 v3 (12 cores in total) and 32 GB of RAM. That provides a lot of CPU to handle requests in parallel. Then I simply performed a basic RHEL 7 installation and ran devstack to spin up an installation of Gnocchi based on the master branch, disabling all of the other OpenStack components. I then tweaked the Apache httpd configuration to use the worker MPM and increased the maximum number of clients that can send requests simultaneously. I configured Gnocchi to use the PostgreSQL indexer, as it's the recommended one, and the file storage driver, based on Carbonara (Gnocchi's own storage engine). That means files were stored locally rather than in Ceph or Swift. Using the file driver is less scalable (you have to run on only one node or use a technology like NFS to share the files), but it was good enough for this benchmark, to get some numbers and to profile the beast. The OpenStack Keystone authentication middleware was not enabled in this setup, as it would add some delay validating the authentication token.

Metric CRUD operations

Metric creation is pretty fast. I managed to attain 1500 metrics/s created pretty easily. Deletion is now asynchronous, which means it's faster than in Gnocchi 1.2, but it's still slower than creation: 300 metrics/s can be deleted. That does not sound like a huge issue, since metric deletion is barely used in production. Retrieving metric information is also pretty fast and goes up to 800 metrics/s. It would be easy to achieve much higher throughput for this one, as it would be easy to cache, but we didn't feel the need to implement it so far. Another important thing is that all of these numbers are roughly constant and barely depend on the number of metrics already managed by Gnocchi.
Operation     | Details                                  | Rate
Create metric | Created 100k metrics in 77 seconds       | 1300 metric/s
Delete metric | Deleted 100k metrics in 190 seconds      | 524 metric/s
Show metric   | Show a metric 100k times in 149 seconds  | 670 metric/s
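For orientation, here is a rough sketch of the individual operations being benchmarked, using the python-gnocchiclient CLI; the subcommand and option names are as I recall them and may differ slightly between client versions, and METRIC_UUID is a placeholder:

$ gnocchi metric create --archive-policy-name low   # prints the new metric's UUID
$ gnocchi metric show METRIC_UUID
$ gnocchi metric delete METRIC_UUID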
Sending and getting measures

Pushing measures into metrics is one of the hottest topics. Starting with Gnocchi 1.1, pushed measures are treated asynchronously, which makes it much faster to push new measures. Getting new numbers on that feature was pretty interesting. The number of measures per second you can push depends on the batch size, meaning the number of actual measurements you send per call. The naive approach is to push 1 measure per call, and in that case Gnocchi is able to handle around 600 measures/s. With a batch containing 100 measures, the number of calls per second goes down to 450, but since you push 100 measures each time, that means 45k measures per second pushed into Gnocchi! I pushed the test further, inspired by the recent blog post of InfluxDB claiming to achieve 300k points per second with their new engine. I ran the same benchmark on the hardware I had, which is roughly two times smaller than the one they used. I managed to push Gnocchi to a little more than 120k measurements per second. If I had the same hardware they used, I could extrapolate the results to almost 250k measures/s pushed. Obviously, you can't strictly compare Gnocchi and InfluxDB since they are not doing exactly the same thing, but it still looks way better than what I expected. Using smaller batch sizes of 1k or 2k improves the throughput further, to around 125k measures/s.
Operation       | Details                                                     | Rate
Push metric 5k  | Push 5M measures with batch of 5k measures in 40 seconds    | 122k measures/s
Push metric 4k  | Push 5M measures with batch of 4k measures in 40 seconds    | 125k measures/s
Push metric 3k  | Push 5M measures with batch of 3k measures in 40 seconds    | 123k measures/s
Push metric 2k  | Push 5M measures with batch of 2k measures in 41 seconds    | 121k measures/s
Push metric 1k  | Push 5M measures with batch of 1k measures in 44 seconds    | 113k measures/s
Push metric 500 | Push 5M measures with batch of 500 measures in 51 seconds   | 98k measures/s
Push metric 100 | Push 5M measures with batch of 100 measures in 112 seconds  | 45k measures/s
Push metric 10  | Push 5M measures with batch of 10 measures in 852 seconds   | 6k measures/s
Push metric 1   | Push 500k measures with batch of 1 measure in 800 seconds   | 624 measures/s
Get measures    | Push 43k measures of 1 metric                               | 260k measures/s
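For reference, a minimal sketch of what a batched push looks like against the Gnocchi v1 REST API; the endpoint host, port and METRIC_UUID are placeholders, and authentication is omitted since Keystone was disabled in this benchmark:

$ cat > measures.json <<'EOF'
[{"timestamp": "2015-10-13T10:00:00", "value": 4.2},
 {"timestamp": "2015-10-13T10:01:00", "value": 4.5}]
EOF
$ curl -X POST -H "Content-Type: application/json" -d @measures.json \
    http://gnocchi.example.com:8041/v1/metric/METRIC_UUID/measures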
What about getting measures? Well, it's actually pretty fast too. Retrieving a metric with 1 month of data at a 1 minute interval (that's 43k points) takes less than 2 seconds. Though it's actually slower than what I expected. The reason seems to be that the JSON is 2 MB big and encoding it takes a lot of time for Python. I'll investigate that. Another point I discovered is that, by default, Gnocchi returns all the datapoints for each granularity available for the requested period, which might double the size of the returned data for nothing if you don't need it. It'll be easy to add an option to the API to only retrieve what you need, though! Once benchmarked, that meant I was able to retrieve 6 metrics per second, which translates to around 260k measures/s.

Metricd speed

New measures that are pushed into Gnocchi are processed asynchronously by the gnocchi-metricd daemon. When doing the benchmarks above, I ran into a very interesting issue: sending 10k measures on a metric would make gnocchi-metricd use up to 2 GB of RAM and 120 % CPU for more than 10 minutes. After further investigation, I found that the naive approach we used to resample datapoints in Carbonara using Pandas was causing that. I reported a bug on Pandas and the upstream author was kind enough to provide a nice workaround, which I sent as a pull request to the Pandas documentation. I wrote a fix for Gnocchi based on that, and started using it. Computing the standard aggregation methods set (std, count, 95pct, min, max, sum, median, mean) for 10k batches of 1 measure (worst case scenario) for one metric with 10k measures now takes only 20 seconds and uses 100 MB of RAM, which is 45 times faster. That means that in normal operations, where only a few new measures are processed, updating a metric only takes a few milliseconds. Awesome!

Comparison with Ceilometer

For comparison's sake, I quickly ran some read operation benchmarks in Ceilometer. I fed it with one month of samples for 100 instances polled every minute. That represents roughly 4.3M samples injected, and that took a while: almost 1 hour, whereas it would have taken less than a minute in Gnocchi. Then I tried to retrieve some statistics in the same way that we provide them in Gnocchi, which means aggregating them over a period of 60 seconds over a month.
Operation           | Details                    | Rate
Read metric SQL     | Read measures for 1 metric | 2min 58s
Read metric MongoDB | Read measures for 1 metric | 28s
Read metric Gnocchi | Read measures for 1 metric | 2s
Obviously, Ceilometer is very slow. It has to look into 4M samples to compute and return the result, which takes a lot of time. Gnocchi, on the other hand, just has to fetch a file and pass it over. That also means that the more samples you have (so the longer you collect data and the more resources you have), the slower Ceilometer will become. This is not a problem with Gnocchi, as I emphasized when I started designing it. Most Gnocchi operations are O(log R) where R is the number of metrics or resources, whereas most Ceilometer operations are O(log S) where S is the number of samples (measures). Since R is millions of times smaller than S, Gnocchi gets to be much faster. And what's even more interesting is that Gnocchi is entirely scalable horizontally. Adding more Gnocchi servers (for the API and its background processing worker metricd) will multiply Gnocchi's performance by the number of servers added.

Improvements

There are several things to improve in Gnocchi, such as splitting Carbonara archives to make them more efficient, especially for drivers such as Ceph and Swift. It's already on my plate, and I'm looking forward to working on that! And if you have any questions, feel free to shoot them in the comment section.

10 July 2015

Gunnar Wolf: Finishing the course on "Free Software and Open Standards"

A couple of months ago, I was invited to give the starting course for the Master's degree in Free Software at the Universidad Andina Simón Bolívar (UASB). UASB is a multinational university, with campuses in (at least) Ecuador, Chile, Bolivia and Colombia; I was doubtful at first regarding the seriousness of this proposal and the viability of the program, but time made my doubts disappear. Bolivia is going through an interesting process, as it has one of the most strongly worded government mandates for migration to free software in the public administration over the next couple of years; this migration has prompted the interest of many professionals in the country. In particular, we have over 40 registered people for this Master's degree. Studying for a Master's degree is a long-term commitment which means a big time investment, and although many of the students are quite new to the idea of free software, they are willing to spend this time (and money, as the university is privately owned and charges for enrollment). I gave this class together with Alejandro Miranda (a.k.a. @pooka), as we have a very good pair-teaching dynamic; we had already given many talks together, but this is the first time we had the opportunity to share a whole course, and the experience was very good. We have read the students' logs, and many of them clearly agree with this. I had to skip two of the (ten) lessons, as I travelled from Mexico to Argentina halfway through the course (of course, we brought the babies to meet my wife's family and friends!), so we also had the honor of having Esteban Lima fill in for those sessions. I am very happy and grateful that the University took care to record our presentations and intends to record and put online all of the classes; as we were the first in the program, there were some understandable hiccups and some sessions were lost, but most are available. Here they are, in case you are interested in referring to them:
Topic                                                                        | Video (my server) | Video (YouTube)
Introduction to free software                                                | Watch             | Watch
History                                                                      | Watch             | Watch
Free culture                                                                 | N/A               | N/A
The effects of free software                                                 | Watch             | Watch
Free software and open standards related to technological sovereignty       | Watch             | Watch
The free software ecosystem                                                  | Watch             | Watch
Free software implementation in Bolivia                                      | Watch             | Watch
Introduction to intellectual property: copyright, patents, trademarks, etc.  | Watch             | Watch
Who is "the community" and why do we speak about it?                         | Watch             | Watch
Current status and challenges for the movement                               | N/A               | N/A
We have yet another video file (which I have not fully followed through) titled ADSIB - Migration plan. It can also be downloaded from my server or watched online at YouTube. All in all: this was a great opportunity and a joy to do. I think the material we used and developed fit well with what was expected of us, and we had fun giving somewhat heterodox readings on our movement.
